14 Stationarity of AR & MA Models

We review and explore stationarity of different models.

1 MA(q)

$y_t=\mu+\varepsilon_t+\theta_1\varepsilon_{t-1}+\cdots+\theta_q\varepsilon_{t-q},\qquad \varepsilon_t\overset{\text{i.i.d.}}{\sim}N(0,\sigma^2).$
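An MA(q) is stationary for any choice of coefficients: its mean is $\mu$ and its variance is $\sigma^2(1+\theta_1^2+\cdots+\theta_q^2)$. A minimal simulation check for an MA(2), with illustrative values $\mu=1$, $\theta_1=0.5$, $\theta_2=-0.3$, $\sigma=1$ (not from the text):

```python
import numpy as np

# Simulate an MA(2): y_t = mu + eps_t + th1*eps_{t-1} + th2*eps_{t-2}.
# The parameter values are illustrative, not from the notes.
rng = np.random.default_rng(0)
mu, th1, th2, sigma = 1.0, 0.5, -0.3, 1.0
n = 200_000
eps = rng.normal(0.0, sigma, n + 2)
y = mu + eps[2:] + th1 * eps[1:-1] + th2 * eps[:-2]

# Stationary moments: mean mu, variance sigma^2*(1 + th1^2 + th2^2) = 1.34.
print(y.mean())   # ~ 1.0
print(y.var())    # ~ 1.34
```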

2 AR(p)

(2.1) $y_t-\phi_1 y_{t-1}-\cdots-\phi_p y_{t-p}=\phi_0+\varepsilon_t.$

Now we discuss its stationarity.

2.1 p=1

Now $y_t-\phi_1 y_{t-1}=\phi_0+\varepsilon_t$.

  1. $|\phi_1|<1$: (2.2) $y_t=\frac{\phi_0}{1-\phi_1}+\sum_{j=0}^{\infty}\phi_1^j\varepsilon_{t-j}$. This is well-defined because $|\phi_1|<1$, and $\varepsilon_t$ is independent of $y_{t-1},y_{t-2},\dots$. We call this the causal stationary AR(1).
  2. $|\phi_1|>1$: (2.3) $y_t=\frac{\phi_0}{1-\phi_1}-\sum_{j=1}^{\infty}\frac{\varepsilon_{t+j}}{\phi_1^j}$. Now $\varepsilon_t$ is not independent of $y_{t-1},y_{t-2},\dots$, but it is independent of $y_{t+1},y_{t+2},\dots$. We call this the non-causal stationary AR(1).
  3. $|\phi_1|=1$: if $\phi_1=1$, then $y_t-y_{t-1}=\phi_0+\varepsilon_t$, so the increments $y_t-y_{t-1}$ are i.i.d. Gaussian. When $\phi_0=0$, this is the random walk model. There is no stationary solution.
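The contrast between cases 1 and 3 is easy to see by simulation: for $|\phi_1|<1$ the variance of $y_t$ settles at $\sigma^2/(1-\phi_1^2)$, while for the random walk it grows like $t\sigma^2$. A sketch with illustrative values $\phi_1=0.7$, $\sigma=1$:

```python
import numpy as np

# Contrast a stationary AR(1) (phi1 = 0.7) with the random walk (phi1 = 1).
# Parameter values are illustrative, not from the notes.
rng = np.random.default_rng(1)
n, reps = 500, 2000
eps = rng.normal(0.0, 1.0, (reps, n))

def ar1_paths(phi1):
    y = np.zeros((reps, n))
    for t in range(1, n):
        y[:, t] = phi1 * y[:, t - 1] + eps[:, t]
    return y

stat = ar1_paths(0.7)   # variance converges to 1/(1 - 0.49) ~ 1.96
rw = ar1_paths(1.0)     # variance at time t is ~ t, never settles

print(stat[:, -1].var())   # ~ 1.96
print(rw[:, -1].var())     # ~ 499, grows without bound
```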

Now we use backshift notation to derive (2.2) and (2.3): $B^k y_t=y_{t-k}$, $B^0=1$. AR(1) becomes $\phi(B)y_t=\phi_0+\varepsilon_t$, where $\phi(z)=1-\phi_1 z$ and $\phi(B)=1-\phi_1 B$. So $y_t=\frac{1}{\phi(B)}(\phi_0+\varepsilon_t)$, and note that $\frac{1}{\phi(B)}=\frac{1}{1-\phi_1 B}=\sum_{j=0}^{\infty}\phi_1^j B^j$, so $y_t=\sum_{j=0}^{\infty}\phi_1^j B^j(\phi_0)+\sum_{j=0}^{\infty}\phi_1^j B^j(\varepsilon_t)=\phi_0\sum_{j=0}^{\infty}\phi_1^j+\sum_{j=0}^{\infty}\phi_1^j\varepsilon_{t-j}$. This only makes sense when $|\phi_1|<1$, in which case $y_t=\frac{\phi_0}{1-\phi_1}+\sum_{j=0}^{\infty}\phi_1^j\varepsilon_{t-j}$, which gives (2.2).
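The MA($\infty$) representation (2.2) can be checked numerically: run the recursion $y_t=\phi_0+\phi_1 y_{t-1}+\varepsilon_t$ for a long burn-in and compare with the truncated series. Parameter values below ($\phi_0=0.5$, $\phi_1=0.8$) are illustrative:

```python
import numpy as np

# Verify (2.2): after a long burn-in, the AR(1) recursion agrees with the
# truncated series phi0/(1-phi1) + sum_j phi1^j eps_{t-j}.
rng = np.random.default_rng(2)
phi0, phi1, n = 0.5, 0.8, 2000
eps = rng.normal(size=n)

y = 0.0
for t in range(n):
    y = phi0 + phi1 * y + eps[t]          # the AR(1) recursion

J = 200                                    # truncation level; 0.8**200 is negligible
series = phi0 / (1 - phi1) + sum(phi1**j * eps[n - 1 - j] for j in range(J))

print(abs(y - series))                     # tiny: the two expressions agree
```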

And when $|\phi_1|>1$, note that $\frac{1}{1-r}=-\frac{1}{r}\cdot\frac{1}{1-1/r}=-\frac{1}{r}\Big(1+\frac{1}{r}+\frac{1}{r^2}+\cdots\Big)=-\sum_{j=1}^{\infty}r^{-j}$ for $|r|>1$.
So $\frac{1}{\phi(B)}=-\sum_{j=1}^{\infty}\phi_1^{-j}B^{-j}$, and $\frac{1}{\phi(B)}(\phi_0+\varepsilon_t)=-\phi_0\sum_{j=1}^{\infty}\phi_1^{-j}-\sum_{j=1}^{\infty}\phi_1^{-j}B^{-j}\varepsilon_t=\frac{\phi_0}{1-\phi_1}-\sum_{j=1}^{\infty}\frac{\varepsilon_{t+j}}{\phi_1^j}$, which is (2.3).
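The non-causal solution (2.3) can be checked the same way: the forward-looking series should satisfy the AR(1) recursion exactly. A sketch with illustrative values $\phi_0=0.5$, $\phi_1=2$:

```python
import numpy as np

# Verify (2.3) for |phi1| > 1: the forward-looking series
# phi0/(1-phi1) - sum_{j>=1} eps_{t+j}/phi1^j solves y_t - phi1*y_{t-1} = phi0 + eps_t.
rng = np.random.default_rng(3)
phi0, phi1, n = 0.5, 2.0, 400
eps = rng.normal(size=n)

def y(t, J=100):   # truncated version of (2.3); 2**-100 is negligible
    return phi0 / (1 - phi1) - sum(eps[t + j] / phi1**j for j in range(1, J))

t = 50
lhs = y(t) - phi1 * y(t - 1)
print(abs(lhs - (phi0 + eps[t])))   # tiny: (2.3) satisfies the recursion
```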


To recap, (2.4) $\frac{1}{1-\phi_1 B}=\sum_{j=0}^{\infty}\phi_1^j B^j$ for $|\phi_1|<1$, and $\frac{1}{1-\phi_1 B}=-\sum_{j=1}^{\infty}\phi_1^{-j}B^{-j}$ for $|\phi_1|>1$.

2.2 p≥1

Still using backshift notation: $\phi(B)y_t=\phi_0+\varepsilon_t$, with $\phi(B)=1-\phi_1 B-\cdots-\phi_p B^p$. To use (2.4), we factorize the polynomial $\phi(z)=1-\phi_1 z-\cdots-\phi_p z^p=(1-a_1 z)\cdots(1-a_p z)$, so $\phi(B)=(1-a_1 B)\cdots(1-a_p B)$.
We then get $y_t=\frac{1}{(1-a_1 B)\cdots(1-a_p B)}(\phi_0+\varepsilon_t)=\prod_{k=1}^{p}\frac{1}{1-a_k B}(\phi_0+\varepsilon_t)=\prod_{k:|a_k|<1}\Big(\sum_{j=0}^{\infty}a_k^j B^j\Big)\prod_{k:|a_k|>1}\Big(-\sum_{j=1}^{\infty}a_k^{-j}B^{-j}\Big)(\phi_0+\varepsilon_t)$.

  1. If $|a_k|<1$ for every $k$, then $y_t=\prod_k\Big(\sum_{j=0}^{\infty}a_k^j B^j\Big)(\phi_0+\varepsilon_t)=\Big(\sum_{j_1=0}^{\infty}\cdots\sum_{j_p=0}^{\infty}a_1^{j_1}\cdots a_p^{j_p}B^{j_1+\cdots+j_p}\Big)(\phi_0+\varepsilon_t)=\phi_0\sum_{j_1=0}^{\infty}\cdots\sum_{j_p=0}^{\infty}a_1^{j_1}\cdots a_p^{j_p}+\sum_{j_1=0}^{\infty}\cdots\sum_{j_p=0}^{\infty}a_1^{j_1}\cdots a_p^{j_p}\varepsilon_{t-j_1-\cdots-j_p}$.
    This is a causal stationary solution. Collecting the terms with $j_1+\cdots+j_p=j$ for $j=0,1,\dots$ gives $y_t=\mu+\sum_{j=0}^{\infty}\psi_j\varepsilon_{t-j}$.
  2. If $|a_k|>1$ for every $k$, then similarly $y_t=\mu+\sum_{j=1}^{\infty}\psi_j\varepsilon_{t+j}$, a stationary solution that depends only on future shocks.
  3. If $|a_k|=1$ for some $k$, there is no stationary solution.
Summary:

  • If $|a_k|\neq 1$ for every $k$, there exists a unique stationary solution to (2.1).
  • If $|a_k|<1$ for every $k$, the solution is causal.
  • If $|a_k|<1$ for some $k$ and $|a_k|>1$ for the others, the solution is stationary but non-causal.
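This classification is mechanical given the $a_k$, which are the reciprocals of the roots of $\phi(z)$. A sketch using numpy's root finder (the AR coefficients below are illustrative, not from the text):

```python
import numpy as np

# Classify an AR(p) via the factorization phi(z) = prod_k (1 - a_k z):
# the a_k are the reciprocals of the roots of phi(z).
def classify_ar(phis):
    # phi(z) = 1 - phi_1 z - ... - phi_p z^p; np.roots wants highest degree first
    coeffs = np.concatenate(([-p for p in phis[::-1]], [1.0]))
    roots = np.roots(coeffs)          # roots z_k of phi(z)
    mod = np.abs(1.0 / roots)         # |a_k| = 1/|z_k|
    if np.any(np.isclose(mod, 1.0)):
        return "no stationary solution"
    if np.all(mod < 1):
        return "causal stationary"
    return "non-causal stationary"

print(classify_ar([0.5, 0.3]))   # both |a_k| < 1
print(classify_ar([1.0]))        # random walk: a_1 = 1
print(classify_ar([1.5]))        # |a_1| = 1.5 > 1
```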

2.3 The Box-Jenkins Modeling Philosophy

3 ARMA(p, q) Models

Combining AR and MA models: (3.1) $(y_t-\mu)-\phi_1(y_{t-1}-\mu)-\cdots-\phi_p(y_{t-p}-\mu)=\varepsilon_t+\theta_1\varepsilon_{t-1}+\cdots+\theta_q\varepsilon_{t-q}.$

When p=0, this is MA(q); when q=0, this is AR(p).

As usual $\varepsilon_t\overset{\text{i.i.d.}}{\sim}N(0,\sigma^2)$. The parameters here are $\mu,\phi_1,\dots,\phi_p,\theta_1,\dots,\theta_q,\sigma$. In backshift notation: $\phi(B)(y_t-\mu)=\theta(B)\varepsilon_t$, where $\phi(z)=1-\phi_1 z-\cdots-\phi_p z^p$ and $\theta(z)=1+\theta_1 z+\cdots+\theta_q z^q$.

It can be shown that if $\phi(z)$ has all roots with modulus strictly larger than 1, then ARMA(p, q) has a stationary causal solution: $y_t=\mu+\psi_0\varepsilon_t+\psi_1\varepsilon_{t-1}+\cdots$
Writing $\psi(z)=\psi_0+\psi_1 z+\psi_2 z^2+\cdots$, we can get the coefficients from $\psi(z)=\theta(z)/\phi(z)$, i.e. $\theta(z)=1+\theta_1 z+\cdots+\theta_q z^q=\phi(z)\psi(z)=(1-\phi_1 z-\cdots-\phi_p z^p)(\psi_0+\psi_1 z+\psi_2 z^2+\cdots)$, and then comparing coefficients of $z^j$ on both sides: $1=\psi_0$, $\theta_1=\psi_1-\psi_0\phi_1$, $\theta_2=\psi_2-\psi_1\phi_1-\psi_0\phi_2$, $\dots$
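Rearranging the coefficient comparison gives the recursion $\psi_j=\theta_j+\sum_{k=1}^{\min(j,p)}\phi_k\psi_{j-k}$ (with $\theta_0=1$ and $\theta_j=0$ for $j>q$). A sketch for an illustrative ARMA(1, 1) with $\phi_1=0.5$, $\theta_1=0.4$ (values not from the text):

```python
import numpy as np

# Compute the MA(infinity) weights psi_j of a causal ARMA(p, q) by the
# recursion psi_j = theta_j + sum_{k=1}^{min(j,p)} phi_k psi_{j-k}.
def arma_psi(phis, thetas, n):
    p, q = len(phis), len(thetas)
    psi = [1.0]                                   # psi_0 = 1
    for j in range(1, n):
        theta_j = thetas[j - 1] if j <= q else 0.0
        psi.append(theta_j + sum(phis[k] * psi[j - 1 - k] for k in range(min(j, p))))
    return np.array(psi)

psi = arma_psi([0.5], [0.4], 10)
print(psi[:4])   # psi_0 = 1, psi_1 = theta1 + phi1 = 0.9, then 0.45, 0.225

# Sanity check: multiplying back, phi(z)*psi(z) should reproduce theta(z).
prod = np.convolve([1.0, -0.5], psi)[:10]
print(np.allclose(prod, [1.0, 0.4] + [0.0] * 8))   # True
```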
ARMA(p, q) generalizes both AR(p) and MA(q); when $p=q=0$, we obtain the white noise model.
For ARMA models, the ACF and PACF are more complicated: when both $p,q\ge 1$, neither the ACF nor the PACF cuts off after a finite lag.
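This can be seen from the MA($\infty$) weights: the autocovariance $\gamma(h)\propto\sum_j\psi_j\psi_{j+h}$ is nonzero at every lag because the $\psi_j$ decay geometrically but never vanish. A sketch for an illustrative ARMA(1, 1) with $\phi_1=0.5$, $\theta_1=0.4$ (values not from the text):

```python
import numpy as np

# For ARMA(1,1), psi_0 = 1 and psi_j = (phi1 + theta1) * phi1**(j-1) for j >= 1,
# so the ACF decays geometrically but never cuts off (unlike an MA(q)).
phi1, theta1, n = 0.5, 0.4, 500
psi = np.concatenate(([1.0], (phi1 + theta1) * phi1 ** np.arange(n - 1)))

# Autocovariance from the MA(infinity) representation: gamma(h) ∝ sum_j psi_j psi_{j+h}.
gamma = np.array([np.dot(psi[: n - h], psi[h:]) for h in range(6)])
rho = gamma / gamma[0]
print(rho)   # every lag is nonzero; rho(k+1)/rho(k) = phi1 for k >= 1
```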